

Search for: All records

Creators/Authors contains: "Fu, H."


  1. In the Hidden-Parameter MDP (HiP-MDP) framework, a family of reinforcement learning tasks is generated by varying hidden parameters specifying the dynamics and reward function for each individual task. The HiP-MDP is a natural model for families of tasks in which meta- and lifelong-reinforcement learning approaches can succeed. Given a learned context encoder that infers the hidden parameters from previous experience, most existing algorithms fall into two categories: model transfer and policy transfer, depending on which function the hidden parameters are used to parameterize. We characterize the robustness of model and policy transfer algorithms with respect to hidden parameter estimation error. We first show that the value function of HiP-MDPs is Lipschitz continuous under certain conditions. We then derive regret bounds for both settings through the lens of Lipschitz continuity. Finally, we empirically corroborate our theoretical analysis by varying the hyper-parameters governing the Lipschitz constants of two continuous control problems; the resulting performance is consistent with our theoretical results. 
    Free, publicly-accessible full text available May 1, 2024
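    As a rough illustration of the role Lipschitz continuity plays here (a minimal sketch; the symbols V, L_V, theta, and theta-hat are illustrative, not the paper's notation): if the value function is Lipschitz in the hidden parameter, then an error in the context encoder's estimate translates into a bounded gap in value, which is the kind of relationship regret bounds of this sort can build on.

```latex
% Illustrative sketch only: V_{\theta} denotes the value function of the task with
% hidden parameter \theta, and \hat{\theta} is the context encoder's estimate.
\[
  \bigl| V_{\theta}(s) - V_{\hat{\theta}}(s) \bigr|
  \;\le\; L_V \,\lVert \theta - \hat{\theta} \rVert
  \qquad \text{for all states } s,
\]
% so an estimation error of at most \epsilon costs at most L_V \epsilon in value.
```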
  2. We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks. The learned posterior combined with a sample-based Bayesian exploration procedure increases the sample efficiency of learning across a family of related tasks. We first derive an analysis of the relationship between the sample complexity and the initialization quality of the posterior in the finite MDP setting. We next scale the approach to continuous-state domains by introducing a Variational Bayesian Lifelong Reinforcement Learning algorithm that can be combined with recent model-based deep RL methods, and that exhibits backward transfer. Experimental results on several challenging domains show that our algorithms achieve better forward and backward transfer performance than state-of-the-art lifelong RL methods. 
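    The toy sketch below (not the paper's algorithm; the bandit setting and all names are illustrative assumptions) shows the general idea of distilling a shared posterior across related tasks and exploring by sampling from it, in the spirit of the sample-based Bayesian exploration the abstract describes.

```python
# Toy sketch: a Gaussian model over per-task arm means with Thompson-sampling
# exploration. Each new task starts from the posterior distilled out of earlier
# tasks, which is the intuition behind the sample-efficiency claim above.
import numpy as np

rng = np.random.default_rng(0)
n_arms, obs_var = 5, 1.0

# Shared prior over arm means, carried across tasks.
prior_mean = np.zeros(n_arms)
prior_var = np.full(n_arms, 10.0)

def run_task(true_means, prior_mean, prior_var, steps=200):
    """Thompson sampling on one task, initialized from the shared prior."""
    post_mean, post_var = prior_mean.copy(), prior_var.copy()
    for _ in range(steps):
        sampled = rng.normal(post_mean, np.sqrt(post_var))   # sample a model
        arm = int(np.argmax(sampled))                         # act greedily in it
        reward = rng.normal(true_means[arm], np.sqrt(obs_var))
        # Conjugate Gaussian update for the chosen arm.
        precision = 1.0 / post_var[arm] + 1.0 / obs_var
        post_mean[arm] = (post_mean[arm] / post_var[arm] + reward / obs_var) / precision
        post_var[arm] = 1.0 / precision
    return post_mean, post_var

# A family of related tasks: per-task means drawn around a common structure.
shared_structure = rng.normal(0.0, 1.0, n_arms)
for task in range(3):
    true_means = shared_structure + rng.normal(0.0, 0.3, n_arms)
    post_mean, post_var = run_task(true_means, prior_mean, prior_var)
    # Crude "distillation": reuse the latest posterior, slightly widened,
    # as the prior for the next task.
    prior_mean, prior_var = post_mean, post_var + 0.3 ** 2
    print(f"task {task}: best arm {np.argmax(true_means)}, "
          f"posterior argmax {np.argmax(post_mean)}")
```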
  3. Guichard, P.; Hamel, V. (Eds.)
    This chapter describes two mechanical expansion microscopy methods with accompanying step-by-step protocols. The first method, mechanically resolved expansion microscopy, uses non-uniform expansion of partially digested samples to provide the imaging contrast that resolves local mechanical properties. Examining the bacterial cell wall with this method, we are able to distinguish bacterial species in mixed populations based on their distinct cell wall rigidity and detect cell wall damage caused by various physiological and chemical perturbations. The second method is mechanically locked expansion microscopy, in which we use a mechanically stable gel network to prevent the original polyacrylate network from shrinking in ionic buffers. This method allows us to use anti-photobleaching buffers in expansion microscopy, enabling detection of novel ultrastructures below the optical diffraction limit through super-resolution single-molecule localization microscopy on bacterial cells and whole-mount immunofluorescence imaging in thick animal tissues. We also discuss potential applications and assess future directions. 